Feed aggregator

Apple is “excited” about the potential of self-driving cars

Ars Technica - Sun, 04/12/2016 - 20:20

(credit: Andrew Cunningham)

Conflicting rumors of Apple's connected car plans have been swirling for some time. But a new letter from Apple's director of product integrity, Steve Kenner, to the National Highway Traffic Safety Administration (NHTSA) sheds some light on the company's plans. In the letter, Kenner writes that Apple is "excited" about the potential of automated transportation and that the company is "investing heavily" in machine learning that could support such systems.

"Apple uses machine learning to make its products and services smarter, more intuitive, and more personal," Kenner states in the letter. "The company is investing heavily in the study of machine learning and automation, and is excited about the potential of automated systems in many areas, including transportation."

Apple states that companies making self-driving vehicles and connected cars should follow "rigorous safety principles," though those rules shouldn't prevent companies from making "consequential progress." The letter also emphasizes the necessity of sharing "crash and near-misses" data to improve this technology, while cautioning that such sharing shouldn't compromise user privacy.


Yooka-Laylee first impressions: A rush job, but polished in the right places

Ars Technica - Sun, 04/12/2016 - 19:00

Yooka (the lizard) and Laylee (the bat) run around their game's opening level, Tribalstack Tropics. (credit: Playtonic Games)

ANAHEIM, California—Upcoming video game Yooka-Laylee is set to bring the 3D platformer genre back in a big way next year, but can it live up to high expectations? The game’s team of ex-Rare developers charmed fans into coughing up £2.1 million of crowdfunded money last year, mostly on the promise of reviving the glory of Banjo-Kazooie. Are we anywhere near a true “Banjo-Threeie” here?

That’s a tough question to answer after only a 20-minute demo, which I got to test at this weekend's PlayStation Experience event. For now, my dive into the game’s opening level has revealed a mix of humor, charm, rough production values, and darned good gameplay.

Laylee, ease my worried mind

Gliding over the game’s opening level is honestly as fun as this (admittedly sweetened) screenshot looks.

Yooka-Laylee’s opening world, called Tribalstack Tropics, plays like a heaping helping of N64 platformer comfort-food—with the added juice of modern 3D hardware, of course. After I hopped, ran, and spun over a variety of familiar platforming challenges, I reached the sunny, green level’s mountain peak, and then I was told to jump all the way down. And jump I did—while holding the game’s hover-jump button to glide long and fall far. The game, running on a PlayStation 4, kept draw distances high during this whole sequence, and I was delighted by the sense of scale. (Soon after, I found out I could run into a warping door to get back to the top and hop all over again. Whee!)


Tornado outbreaks are getting more violent

Ars Technica - Sun, 04/12/2016 - 18:00

(credit: The National Severe Storms Laboratory)

A tornado may cause localized destruction, but the most severe problems come when a storm system spawns multiple tornadoes. This creates what's called a tornado outbreak, which spreads destruction across a wider area. Now, a new study suggests that the most violent tornado outbreaks are on the rise. But the researchers behind the study see no indication that the rise in tornado outbreaks is connected with our warming climate.

It would make sense for a warming climate to influence tornado activity. After all, higher temperatures mean more energy in the atmosphere, potentially powering the storms. But past studies have produced mixed results when it comes to tornado activity. There's no significant trend in the number of tornadoes or the frequency of outbreaks (defined as six or more tornadoes that occur in rapid succession). At the same time, tornadoes are occurring across more of the year, and the number of tornadoes in outbreaks has become increasingly variable.

A team of researchers from Columbia University (Michael Tippett, Chiara Lepore, and Joel Cohen) decided to look at this last figure more carefully. They collected data on the number of storms in outbreaks in the period between 1965 and 2015. While there was no trend in the number of outbreaks, the number of tornadoes per outbreak went up across that time period. Not only was the mean number of tornadoes per outbreak going up, but the more extreme outbreaks—the ones with the most storms—were increasing the fastest.
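As a purely illustrative aside, the kind of check the researchers describe can be sketched in a few lines: compute, per year, the mean outbreak size and its upper tail, then test whether each trends upward. Everything below (the outbreak sizes, their distribution, and the trend shape) is synthetic and invented for illustration; it does not reproduce the study's data or methods.

```python
# Synthetic sketch: does the yearly mean outbreak size, and its upper
# tail (90th percentile), trend upward? All data here is invented.
import random

random.seed(42)

def slope(xs, ys):
    """Ordinary least-squares slope of ys against xs."""
    n = len(xs)
    mx = sum(xs) / n
    my = sum(ys) / n
    num = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    den = sum((x - mx) ** 2 for x in xs)
    return num / den

def percentile(values, p):
    """Crude percentile: the value at fraction p of the sorted list."""
    s = sorted(values)
    return s[min(len(s) - 1, int(p * len(s)))]

years = list(range(1965, 2016))
means, tails = [], []
for i, year in enumerate(years):
    # 30 outbreaks per year, each with >= 6 tornadoes (the definition
    # quoted above); the synthetic distribution's tail widens over time.
    sizes = [6 + int(random.expovariate(1 / (4 + 0.1 * i))) for _ in range(30)]
    means.append(sum(sizes) / len(sizes))
    tails.append(percentile(sizes, 0.9))

print(slope(years, means) > 0, slope(years, tails) > 0)
```

With the synthetic data above, both slopes come out positive, mirroring the qualitative finding (more tornadoes per outbreak, with the extremes growing fastest).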


Why do hunters choose not to shoot?

Ars Technica - Sun, 04/12/2016 - 17:00

A red deer in Norway (credit: flickr user: Jörg Hempel)

Hunting animals like deer is often important to keeping their populations at a reasonable size. In areas where natural predators are few or nonexistent, the only way to control populations of certain species is through human hunting.

Human hunters behave differently from natural predators though. For instance, natural predators aren’t interested in trophy hunting, so they don’t target animals that would look good on their walls. Natural predators also aren’t reluctant to kill the young, whereas human hunters tend to avoid this. And human hunters may make other decisions about what to kill based on factors we don't really understand.

To understand how these factors might influence prey populations, a group of researchers in Norway, Germany, and the Netherlands published a paper that tries to predict hunters’ behavior.


Hear the pulse: Heart rate monitoring fitness earbuds tested

Ars Technica - Sun, 04/12/2016 - 16:00

Video shot/edited by Jennifer Hahn. (video link)

This year, more heart rate monitors have made their way into fitness trackers than ever before. All the major companies—Fitbit, Garmin, and Polar, among many others—have made heart rate monitoring more accessible by putting it into devices that cost less than $200 (many of them less than $150). Most of these devices are wristband wearables—but as 2016 ends and 2017 approaches, audio giants are getting into the mix. Workout headphones and earbuds have been around for a while, but now big names including Bose and JBL are making fitness earbuds that also track heart rate.

Why the ears?

You have the right to be skeptical about pulse-sensing earbuds. Before we get into why earbud-based monitors are becoming more prevalent, let's take a look at your current options. Most widely available heart rate monitors are chest straps or wrist-based wearables. The former are generally considered more accurate, since straps are secured to the torso, close to your heart.


The Mysterious Machinery of Creatures That Glow in the Deep

Wired - Sun, 04/12/2016 - 13:00
Bioluminescent organisms have evolved dozens of times over the course of life's history. Recent studies are narrowing in on the complicated biochemistry needed to illuminate the dark. The post The Mysterious Machinery of Creatures That Glow in the Deep appeared first on WIRED.

Never Ever (Ever) Download Android Apps Outside of Google Play

Wired - Sun, 04/12/2016 - 13:00
Tricking people into downloading malicious mobile apps is a con as old as time itself (or at least as old as smartphones). Don't fall for it.

Review: Ozobot Evo

Wired - Sun, 04/12/2016 - 12:31
This user-programmable bot for children makes playtime educational.

Niels Thykier: Piuparts integration in britney

Planet Debian - Sun, 04/12/2016 - 12:06

As of today, britney now fetches reports from piuparts.debian.org and uses it as a part of her evaluation for package migration.  As with her RC bug check, we are only preventing (known) regressions from migrating.

The messages (subject to change) look something like:

  • Piuparts tested OK
  • Rejected due to piuparts regression
  • Ignoring piuparts failure (Not a regression)
  • Cannot be tested by piuparts (not a blocker)

If you want to do machine parsing of the Britney excuses, we also provide an excuses.yaml. In there, you are looking for “excuses[X].policy_info.piuparts.test-results”, which will be one of:

  • pass
  • regression
  • failed
  • cannot-be-tested
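A minimal sketch of that machine parsing in Python: only the policy_info.piuparts.test-results path comes from the post; the top-level "excuses" list and the "item-name" key below are assumptions for illustration, and in practice the structure would come from yaml.safe_load() on the downloaded excuses.yaml.

```python
# Sketch of machine-parsing britney's excuses.yaml. The hand-written
# dict stands in for the result of yaml.safe_load(open("excuses.yaml")).
# Only policy_info.piuparts.test-results is taken from the post; the
# "excuses" list and "item-name" key are illustrative assumptions.
parsed = {
    "excuses": [
        {"item-name": "foo/1.2-1",
         "policy_info": {"piuparts": {"test-results": "pass"}}},
        {"item-name": "bar/2.0-1",
         "policy_info": {"piuparts": {"test-results": "regression"}}},
    ],
}

# Collect items whose migration is blocked by a piuparts regression.
blocked = [
    e["item-name"]
    for e in parsed["excuses"]
    if e.get("policy_info", {})
        .get("piuparts", {})
        .get("test-results") == "regression"
]
print(blocked)
```

The .get() chain keeps the walk safe for excuses that carry no piuparts verdict at all.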

Enjoy.

Jo Shields: A quick introduction to Flatpak

Planet Debian - Sun, 04/12/2016 - 11:44

Releasing ISV applications on Linux is often hard. The ABI of all the libraries you need changes seemingly weekly. Hence you have the option of bundling the world, or building a thousand releases to cover a thousand distribution versions. As a case in point, when MonoDevelop started bundling a C Git library instead of using a C# git implementation, it gained dependencies on all sorts of fairly weak ABI libraries whose exact ABI mix was not consistent across any given pair of distro releases. This broke our policy of releasing “works on anything” .deb and .rpm packages. As a result, I pretty much gave up on packaging MonoDevelop upstream with version 5.10.

Around the 6.1 release window, I decided to re-evaluate the question. I took a closer look at some of the fancy-pants new distribution methods that get a lot of coverage in the Linux press: Snap, AppImage, and Flatpak.

I started with AppImage. It’s very good and appealing for its specialist areas (no external requirements for end users), but it’s kinda useless at solving some of our big areas (the ABI-vs-bundling problem, updating in general).

Next, I looked at Flatpak (once xdg-app). I liked the concept a whole lot. There’s a simple 3-tier dependency hierarchy: Applications, Runtimes, and Extensions. An application depends on exactly one runtime.  Runtimes are root-level images with no dependencies of their own. Extensions are optional add-ons for applications. Anything not provided in your target runtime, you bundle. And an integrated updates mechanism allows for multiple branches and multiple releases parallel-installed (e.g. alpha & stable, easily switched).

There are also security-related sandboxing features, but my main concerns on first examination were the dependency and distribution questions. That said, some users might be happier running Microsoft software on their Linux desktop if that software is locked up inside a sandbox, so I’ve decided to embrace that functionality rather than seek to avoid it.

I basically stopped looking at this point (sorry Snap!). Flatpak provided me with all the functionality I wanted, with an extremely helpful and responsive upstream. I got to work on trying to package up MonoDevelop.

Flatpak (optionally!) uses a JSON manifest for building stuff. Because Mono is still largely stuck in a Gtk+2 world, I opted for the simplest runtime, org.freedesktop.Runtime, and bundled stuff like Gtk+ into the application itself.

Some gentle patching here & there resulted in this repository. Every time I came up with an exciting new edge case, upstream would suggest a workaround within hours – or failing that, added new features to Flatpak just to support my needs (e.g. allowing /dev/kvm to optionally pass through the sandbox).

The end result: as of the upcoming 0.8.0 release of Flatpak, going from a clean install of the flatpak package to a working MonoDevelop takes a single command: flatpak install --user --from https://download.mono-project.com/repo/monodevelop.flatpakref

For the current 0.6.x versions of Flatpak, the user also needs to flatpak remote-add --user --from gnome https://sdk.gnome.org/gnome.flatpakrepo first – this step will be automated in 0.8.0. This will download org.freedesktop.Runtime, then com.xamarin.MonoDevelop; export icons ‘n’ stuff into your user environment so you can just click to start.

There are some lingering experience issues due to the sandbox which are on my radar. “Run on external console” doesn’t work, for example, nor does “open containing folder”. There are people working on that (a missing DBus# feature to allow breaking out of the sandbox). But overall, I’m pretty happy. I won’t be entirely satisfied until I have something approximating feature equivalence to the old .debs. I don’t think that will ever quite be there, since there’s just no rational way to allow arbitrary /usr stuff into the sandbox, but it should provide a decent basis for a QA-able, supportable Linux MonoDevelop. And we can use this work as a starting point for any further fancy features on Linux.

Watch Captain Marvel kick butt in Marvel vs. Capcom Infinite gameplay premiere

Ars Technica - Sun, 04/12/2016 - 07:16

Marvel vs. Capcom Infinite: Gameplay reveal trailer

ANAHEIM, California—After a week of teases and leaks, Capcom confirmed on Saturday that its long-running fighting series, Marvel vs. Capcom, would receive a sequel in 2017. The announcement came during the kickoff panel at this weekend's PlayStation Experience expo, but the cooler stuff came later at the evening's Street Fighter V world finals tournament.

The crossover sequel, dubbed Marvel vs. Capcom Infinite, received its world premiere gameplay trailer on Saturday night, and it was introduced by Street Fighter V director Yoshinori Ono. "After you watch this, you might not be able to go to sleep tonight," Ono told the crowd.

Captain Marvel? More like Captain Marvelous.

The 1:30 trailer might not have been insomnia-inducing, but it was definitely far from a lullaby. The game now only lets players create teams of two, as opposed to the prior games' three-on-three fights, and the trailer showed Capcom favorites Mega Man and Ryu squaring off against Marvel superheroes Iron Man and Captain Marvel.


The FireBee: modern Atari clone

OS news - Sun, 04/12/2016 - 02:16
The FireBee is a new Atari-compatible computer. Ataris and Atari clones are distinctive computers with their own hardware and software; they aren't PC-, Mac-, or Amiga-compatible. A FireBee is similar to an Atari Falcon and works much like one, running most of the Atari-compatible software that would run on a Falcon. Unlike older Ataris and their clones, the FireBee is a modern computer that supports almost everything you'd expect from a current machine: USB ports, Ethernet, a DVI-I monitor connector, an SD-card reader, and more. This brand-new Atari compatible is not cheap, but much like the current Amiga computers, if you're worried about the price, you're probably not the intended audience. Note that even though the order page says "pre-order", I think that's a typo - you can order them directly from the Swiss company that makes them, too. I love that people and companies are passionate enough to keep developing, building, and selling machines like this - it's a vital effort to keep platforms alive well into the future.

The New Guardians of the Galaxy Vol. 2 Trailer Is Groot Stuff

Wired - Sun, 04/12/2016 - 00:59
2017—aka the year of our Star-Lord—can't come soon enough.

Ben Hutchings: Linux Kernel Summit 2016, part 1

Planet Debian - Sun, 04/12/2016 - 00:54

I attended this year's Linux Kernel Summit in Santa Fe, NM, USA and made notes on some of the sessions that were relevant to Debian. LWN also reported many of the discussions. This is the first of two parts of my notes; part 2 is here.

Stable process

Jiri Kosina, in his role as a distribution maintainer, sees too many unsuitable patches being backported - e.g. a fix for a bug that wasn't present or a change that depends on an earlier semantic change so that when cherry-picked it still compiles but isn't quite right. He thinks the current review process is insufficient to catch them.

As an example, a recent fix for a minor information leak (CVE-2016-9178) depended on an earlier change to page fault handling. When backported by itself, it introduced a much more serious security flaw (CVE-2016-9644). This could have been caught very quickly by a system call fuzzer.

Possible solutions: require 'Fixes' field, not just 'Cc: stable'. Deals with 'bug wasn't present', but not semantic changes.
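For concreteness, the trailers under discussion look like this at the end of a kernel commit message (the hash and subject below are placeholders, not a real commit):

```
Fixes: 1234567890ab ("subsystem: short description of the commit that introduced the bug")
Cc: stable@vger.kernel.org
```

A 'Fixes' trailer names the commit that introduced the bug, which lets stable maintainers check whether a given stable branch contains that commit at all; 'Cc: stable' merely requests backporting.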

There was some disagreement whether 'Fixes' without 'Cc: stable' should be sufficient for inclusion in stable. Ted Ts'o said he specifically does that in some cases where he thinks backporting is risky. Greg Kroah-Hartman said he takes it as a weaker hint for inclusion in stable.

Is it a good idea to keep 'Cc: stable' given the risk of breaking embargo? On balance, yes, it only happened once.

Sometimes it's hard to know exactly how/when the bug was introduced. Linus doesn't want people to guess and add incorrect 'Fixes' fields. There is still the option to give some explanation and hints for stable maintainers in the commit message. Ideally the upstream developer should provide a test case for the bug.

Is Linus happy?

Linus complained about minor fixes coming later in the release cycle. After rc2, all fixes should either be for new code introduced in the current release cycle or for important bugs. However, new, production-ready drivers without new infrastructure dependencies are welcome at almost any point in the release cycle.

He was unhappy about some big changes in RDMA, but I'm not sure what those were.

Bugzilla and bug tracking

Laura Abbott started a discussion of bugzilla.kernel.org, talking about subsystems where maintainers ignore it and any responses come from random people giving bad advice. This is a terrible experience for users. Several maintainers are actively opposed to using it, and the email bridge no longer works (or not well?). She no longer recommends that Fedora bug submitters file reports there.

Are there any alternatives? None were proposed.

Someone asked whether Bugzilla could tell reporters to use email for certain products/components instead of continuing with the bug entry process.

Konstantin Ryabitsev talked about the difficulty of upgrading a customised instance of Bugzilla. Much customisation requires patches which don't apply to next version (maybe due to limitations of the extension mechanism?). He has had to drop many such patches.

Email is hard to track when a bug is handed over from one maintainer to another. Email archives are very unreliable. Linus: I'll take Bugzilla over mail-archive.

No-one is currently keeping track of bugs across the kernel and making sure they get addressed by an appropriate maintainer. It's (at least) a full-time job, but no individual company has a business case for paying for this. Konstantin suggested (I think) that CII might pay for this.

There was some discussion of what information should be included in a bug report. The "Cut here" line in oops messages was said to be a mistake because there are often relevant messages before it. The model of computer is often important. Beyond that, there was not much interest in the automated information gathering that distributions do. Distribution maintainers should curate bugs before forwarding upstream.

There was a request for custom fields per component in Bugzilla. Konstantin says this is doable (possibly after upgrade to version 5); it doesn't require patches.

The future of the Kernel Summit

The kernel community is growing, and the invitation list for the core day is too small to include all the right people for technical subjects. For 2017, the core half-day will have an even smaller invitation list, only ~30 subsystem maintainers that Linus pulls from. The entire technical track will be open (I think).

Kernel Summit 2017 and some mini-summits will be held in Prague alongside Open Source Summit Europe (formerly LinuxCon Europe) and Embedded Linux Conference Europe. There were some complaints that LinuxCon is not that interesting to kernel developers, compared to Linux Plumbers Conference (which followed this year's Kernel Summit). However, the Linux Foundation is apparently soliciting more hardcore technical sessions.

Kernel Summit and Linux Plumbers Conference are quite small, and it's apparently hard to find venues for them in cities that also have major airports. It might be more practical to co-locate them both with Open Source Summit in future.

time_t and 2038

On 32-bit architectures the kernel's representation of real time (time_t etc.) will break in early 2038. Fixing this in a backward-compatible way is a complex problem.

Arnd Bergmann presented the current status of this process. There has not yet been much progress in mainline, but more fixes have been prepared. The changes to struct inode and to input events are proving to be particularly problematic. There is a need to add new system calls, and he intends to add these for all (32-bit) architectures at once.
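The arithmetic behind that date is easy to verify: a signed 32-bit time_t counts seconds from the Unix epoch and tops out at 2³¹ − 1. A quick check:

```python
# A signed 32-bit time_t holds seconds since 1970-01-01 UTC and
# overflows past 2**31 - 1, which lands in January 2038.
from datetime import datetime, timezone

limit = 2**31 - 1  # largest value a signed 32-bit integer can hold
print(datetime.fromtimestamp(limit, tz=timezone.utc))
```

Running it prints 2038-01-19 03:14:07+00:00, the moment at which an unfixed 32-bit time_t wraps around.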

Copyright retention

James Bottomley talked about how developers can retain copyright on their contributions. It's hard to renegotiate within an existing employment; much easier to do this when preparing to sign a new contract.

Some employers expect you to fill in a document disclosing 'prior inventions' you have worked on. Depending on how it's worded, this may require the employer to negotiate with you again whenever they want you to work on that same software.

It's much easier for contractors to retain copyright on their work - customers expect to have a custom agreement and don't expect to get copyright on contractor's software.

Vincent Bernat: Build-time dependency patching for Android

Planet Debian - Sat, 03/12/2016 - 23:20

This post shows how to patch an external dependency for an Android project at build-time with Gradle. This leverages the Transform API and Javassist, a Java bytecode manipulation tool.

    buildscript {
        dependencies {
            classpath 'com.android.tools.build:gradle:2.2.+'
            classpath 'com.android.tools.build:transform-api:1.5.+'
            classpath 'org.javassist:javassist:3.21.+'
            classpath 'commons-io:commons-io:2.4'
        }
    }

Disclaimer: I am not a seasoned Android programmer, so take this with a grain of salt.

Context§

This section adds some context to the example. Feel free to skip it.

Dashkiosk is an application to manage dashboards on many displays. It provides an Android application you can install on one of those cheap Android sticks. Under the hood, the application is an embedded webview backed by the Crosswalk Project web runtime which brings an up-to-date web engine, even for older versions of Android1.

Recently, a security vulnerability has been spotted in how invalid certificates were handled. When a certificate cannot be verified, the webview defers the decision to the host application by calling the onReceivedSslError() method:

Notify the host application that an SSL error occurred while loading a resource. The host application must call either callback.onReceiveValue(true) or callback.onReceiveValue(false). Note that the decision may be retained for use in response to future SSL errors. The default behavior is to pop up a dialog.

The default behavior is specific to Crosswalk webview: the Android builtin one just cancels the load. Unfortunately, the fix applied by Crosswalk is different and, as a side effect, the onReceivedSslError() method is not invoked anymore2.

Dashkiosk comes with an option to ignore TLS errors3. The mentioned security fix breaks this feature. The following example will demonstrate how to patch Crosswalk to recover the previous behavior4.

Simple method replacement§

Let’s replace the shouldDenyRequest() method from the org.xwalk.core.internal.SslUtil class with this version:

    // In SslUtil class
    public static boolean shouldDenyRequest(int error) {
        return false;
    }

Transform registration§

Gradle Transform API enables the manipulation of compiled class files before they are converted to DEX files. To declare a transform and register it, include the following code in your build.gradle:

    import com.android.build.api.transform.Context
    import com.android.build.api.transform.QualifiedContent
    import com.android.build.api.transform.Transform
    import com.android.build.api.transform.TransformException
    import com.android.build.api.transform.TransformInput
    import com.android.build.api.transform.TransformOutputProvider
    import org.gradle.api.logging.Logger

    class PatchXWalkTransform extends Transform {
        Logger logger = null;

        public PatchXWalkTransform(Logger logger) {
            this.logger = logger
        }

        @Override
        String getName() {
            return "PatchXWalk"
        }

        @Override
        Set<QualifiedContent.ContentType> getInputTypes() {
            return Collections.singleton(QualifiedContent.DefaultContentType.CLASSES)
        }

        @Override
        Set<QualifiedContent.Scope> getScopes() {
            return Collections.singleton(QualifiedContent.Scope.EXTERNAL_LIBRARIES)
        }

        @Override
        boolean isIncremental() {
            return true
        }

        @Override
        void transform(Context context,
                       Collection<TransformInput> inputs,
                       Collection<TransformInput> referencedInputs,
                       TransformOutputProvider outputProvider,
                       boolean isIncremental)
                throws IOException, TransformException, InterruptedException {
            // We should do something here
        }
    }

    // Register the transform
    android.registerTransform(new PatchXWalkTransform(logger))

The getInputTypes() method should return the set of types of data consumed by the transform. In our case, we want to transform classes. Another possibility is to transform resources.

The getScopes() method should return a set of scopes for the transform. In our case, we are only interested by the external libraries. It’s also possible to transform our own classes.

The isIncremental() method returns true because we support incremental builds.

The transform() method is expected to take all the provided inputs and copy them (with or without modifications) to the location supplied by the output provider. We haven't implemented this method yet; as written, the transform would strip all external dependencies from the application.

Noop transform§

To keep all external dependencies unmodified, we must copy them:

    @Override
    void transform(Context context,
                   Collection<TransformInput> inputs,
                   Collection<TransformInput> referencedInputs,
                   TransformOutputProvider outputProvider,
                   boolean isIncremental)
            throws IOException, TransformException, InterruptedException {
        inputs.each {
            it.jarInputs.each {
                def jarName = it.name
                def src = it.getFile()
                def dest = outputProvider.getContentLocation(jarName,
                                                             it.contentTypes,
                                                             it.scopes,
                                                             Format.JAR);
                def status = it.getStatus()
                if (status == Status.REMOVED) { // ❶
                    logger.info("Remove ${src}")
                    FileUtils.delete(dest)
                } else if (!isIncremental || status != Status.NOTCHANGED) { // ❷
                    logger.info("Copy ${src}")
                    FileUtils.copyFile(src, dest)
                }
            }
        }
    }

We also need a few additional imports (Format for the getContentLocation() call above, plus the status and file utilities):

    import com.android.build.api.transform.Format
    import com.android.build.api.transform.Status
    import org.apache.commons.io.FileUtils

Since we are handling external dependencies, we only have to manage JAR files. Therefore, we only iterate on jarInputs and not on directoryInputs. There are two cases when handling incremental build: either the file has been removed (❶) or it has been modified (❷). In all other cases, we can safely assume the file is already correctly copied.

JAR patching§

When the external dependency is the Crosswalk JAR file, we also need to modify it. Here is the first part of the code (replacing ❷):

    if ("${src}" ==~ ".*/org.xwalk/xwalk_core.*/classes.jar") {
        def pool = new ClassPool()
        pool.insertClassPath("${src}")
        def ctc = pool.get('org.xwalk.core.internal.SslUtil') // ❸

        def ctm = ctc.getDeclaredMethod('shouldDenyRequest')
        ctc.removeMethod(ctm) // ❹

        ctc.addMethod(CtNewMethod.make("""
    public static boolean shouldDenyRequest(int error) {
        return false;
    }
    """, ctc)) // ❺

        def sslUtilBytecode = ctc.toBytecode() // ❻

        // Write back the JAR file
        // …
    } else {
        logger.info("Copy ${src}")
        FileUtils.copyFile(src, dest)
    }

We also need the following additional imports to use Javassist:

    import javassist.ClassPath
    import javassist.ClassPool
    import javassist.CtNewMethod

Once we have located the JAR file we want to modify, we add it to our classpath and retrieve the class we are interested in (❸). We locate the appropriate method and delete it (❹). Then, we add our custom method using the same name (❺). The whole operation is done in memory. We retrieve the bytecode of the modified class in ❻.

The remaining step is to rebuild the JAR file:

    def input = new JarFile(src)
    def output = new JarOutputStream(new FileOutputStream(dest))

    // ❼
    input.entries().each {
        if (!it.getName().equals("org/xwalk/core/internal/SslUtil.class")) {
            def s = input.getInputStream(it)
            output.putNextEntry(new JarEntry(it.getName()))
            IOUtils.copy(s, output)
            s.close()
        }
    }

    // ❽
    output.putNextEntry(new JarEntry("org/xwalk/core/internal/SslUtil.class"))
    output.write(sslUtilBytecode)

    output.close()

We need the following additional imports:

    import java.util.jar.JarEntry
    import java.util.jar.JarFile
    import java.util.jar.JarOutputStream
    import org.apache.commons.io.IOUtils

There are two steps. In ❼, all classes are copied to the new JAR, except the SslUtil class. In ❽, the modified bytecode for SslUtil is added to the JAR.

That’s all! You can view the complete example on GitHub.

More complex method replacement§

In the above example, the new method doesn’t use any external dependency. Let’s suppose we also want to replace the sslErrorFromNetErrorCode() method from the same class with the following one:

    import org.chromium.net.NetError;
    import android.net.http.SslCertificate;
    import android.net.http.SslError;

    // In SslUtil class
    public static SslError sslErrorFromNetErrorCode(int error,
                                                    SslCertificate cert,
                                                    String url) {
        switch(error) {
            case NetError.ERR_CERT_COMMON_NAME_INVALID:
                return new SslError(SslError.SSL_IDMISMATCH, cert, url);
            case NetError.ERR_CERT_DATE_INVALID:
                return new SslError(SslError.SSL_DATE_INVALID, cert, url);
            case NetError.ERR_CERT_AUTHORITY_INVALID:
                return new SslError(SslError.SSL_UNTRUSTED, cert, url);
            default:
                break;
        }
        return new SslError(SslError.SSL_INVALID, cert, url);
    }

The major difference with the previous example is that we need to import some additional classes.

Android SDK import§

The classes from the Android SDK are not part of the external dependencies. They need to be imported separately. The full path of the JAR file is:

    androidJar = "${android.getSdkDirectory().getAbsolutePath()}/platforms/" +
        "${android.getCompileSdkVersion()}/android.jar"

We need to load it before adding the new method into SslUtil class:

    def pool = new ClassPool()
    pool.insertClassPath(androidJar)
    pool.insertClassPath("${src}")
    def ctc = pool.get('org.xwalk.core.internal.SslUtil')
    def ctm = ctc.getDeclaredMethod('sslErrorFromNetErrorCode')
    ctc.removeMethod(ctm)
    pool.importPackage('android.net.http.SslCertificate');
    pool.importPackage('android.net.http.SslError');
    // …

External dependency import§

We must also import org.chromium.net.NetError and therefore, we need to put the appropriate JAR in our classpath. The easiest way is to iterate through all the external dependencies and add them to the classpath.

    def pool = new ClassPool()
    pool.insertClassPath(androidJar)
    inputs.each {
        it.jarInputs.each {
            def jarName = it.name
            def src = it.getFile()
            def status = it.getStatus()
            if (status != Status.REMOVED) {
                pool.insertClassPath("${src}")
            }
        }
    }
    def ctc = pool.get('org.xwalk.core.internal.SslUtil')
    def ctm = ctc.getDeclaredMethod('sslErrorFromNetErrorCode')
    ctc.removeMethod(ctm)
    pool.importPackage('android.net.http.SslCertificate');
    pool.importPackage('android.net.http.SslError');
    pool.importPackage('org.chromium.net.NetError');
    ctc.addMethod(CtNewMethod.make("…"))
    // Then, rebuild the JAR...

Happy hacking!

  1. Before Android 4.4, the webview was severely outdated. Starting from Android 5, the webview is shipped as a separate component with updates. Embedding Crosswalk is still convenient as you know exactly which version you can rely on. 

  2. I hope to have this fixed in later versions. 

  3. This may seem harmful, and you are right. However, if you have an internal CA, it is currently not possible to provide your own trust store to a webview. Moreover, the system trust store is not used either. You also may want to use TLS for authentication only, with client certificates, a feature supported by Dashkiosk. 

  4. Crosswalk being an open source project, an alternative would have been to patch the Crosswalk source code and recompile it. However, Crosswalk embeds Chromium, and recompiling the whole stack consumes a lot of resources. 

In new lawsuit, Instacart shoppers say they were regularly underpaid

Ars Technica - Sat, 03/12/2016 - 19:15

(credit: Kristin Sloan)

On Thursday, 12 Instacart “shoppers” across 11 states filed a proposed US federal class-action lawsuit against the San Francisco startup, alleging a breach of state and federal labor laws.

The Instacart lawsuit is one of several currently targeting so-called “sharing economy” startups, and they all get at the same question: can workers be accurately classified as independent contractors, or should they properly be designated as employees? In Instacart’s case, customers order groceries online, but those groceries are then picked up and delivered by the company’s shoppers. So, should those shoppers be treated as employees?

Classifying such workers as employees rather than contractors would entitle them to a number of benefits under federal law. This includes unemployment benefits, workers’ compensation, the right to unionize, and, most importantly, the right to seek reimbursement for mileage and tips. This reclassification would also incur new and significant costs for Instacart and other affected companies, including Uber and Lyft. An on-demand cleaning service, Homejoy, shut down last year just months after it was hit with a similar labor lawsuit.


Op-ed: Stop pretending there’s a difference between “online” and “real life”

Ars Technica - Sat, 03/12/2016 - 17:00

My face is made of internet. (credit: Fiona Staples)

Sometimes I get into one of those conversations about the Internet where the only way I can reply is to quote from The IT Crowd: "Are you from the past?" I say that every time someone asserts that the online world is somehow separate from real life. You'd be surprised how much this comes up, even after all these years of people's digital shenanigans leading to everything from espionage and murder to international video fame and fancy book deals.

But now that the U.S. has a president-elect who communicates with the American people almost exclusively via Twitter and YouTube, it's really time to stop kidding ourselves. Before the election, many of us (including me) would have shrugged off the fake news stories piling up in the margins of our Facebook feeds. Nobody takes that stuff seriously, right? The election of Donald Trump and several recent tweets from the House Science Committee are two strong pieces of evidence that, yes, people do.

In reality, politics have straddled the digital and meatspace for decades. Though government officials may have just learned about "the cyber," people working in computer security have been dealing with criminal and whimsical incursions into their systems since the late 20th century. It was 1990 when the infamous Operation Sundevil swept up innocents in a massive Secret Service dragnet operation to stop carders. The Stuxnet worm, which affected physical operations of centrifuges at a uranium enrichment plant in Iran, is only the most obvious example of how digital ops can have consequences away from the keyboard.


Kate Rubins just scienced the @$!# out of the International Space Station

Ars Technica - Sat, 03/12/2016 - 16:00

NASA

The International Space Station fills several roles for NASA—providing a toehold in outer space for human activity, testing closed-loop technologies for long-duration spaceflight, and developing international partnerships. But perhaps the station's biggest selling point is science. It was, after all, designated a national laboratory in 2005. And what does a lab need? Scientists.

Yet despite the vastly increased diversity of the astronaut corps since the early, macho days of the Mercury 7, many astronauts today are still fighter pilots, engineers, and surgeons. Relatively few are bona fide research scientists. But Kate Rubins is, and she spent 115 days on the space station this summer and fall. Before becoming an astronaut, Rubins trained in molecular biology and led a laboratory of more than a dozen researchers at the Massachusetts Institute of Technology. She and her team specialized in viruses such as Ebola and Marburg, and their field work took them to Central and Western Africa.


Ars Cardboard’s 2016 board game gift guide

Ars Technica - Sat, 03/12/2016 - 15:00


Welcome to Ars Cardboard, our weekend look at tabletop games. Check out our complete board gaming coverage at cardboard.arstechnica.com.

The frenzied holiday gift-shopping season is now in full swing, and board gamers across the globe are dusting off their Kallax shelves in preparation for the cardboard bounty that surely awaits them. It’s left to you, Friend of the Gamer, to make those dreams come true.

Whether your giftee is a longtime gamer or a brand new convert, Ars Cardboard is here with a list of games to please players of every stripe. We've broken your friends and family into tidy little categories and provided a main pick and some alternatives for each demographic. Our main picks focus on titles released in the last year or two, but we dug into some older titles for our expanded picks. To boot, most games on this list are friendly to tabletop newbies.


Space Photos of the Week: Mosey on Down to the Star Bar

Wired - Sat, 03/12/2016 - 13:00
Space photos of the week, November 28 to December 2, 2016.