Feed aggregator

Microsoft removes 260-character NTFS path limit

OS news - Mon, 30/05/2016 - 22:14
The maximum length for a path (file name and its directory route) - also known as MAX_PATH - has been defined as 260 characters. But with the latest Windows 10 Insider Preview, Microsoft is giving users the ability to lift that limit. As the setting's description puts it: "Enabling NTFS long paths will allow manifested win32 applications and Windows Store applications to access paths beyond the normal 260 char limit per node." Did anyone ever run into this limit? It seems like something that would really be bothersome on servers.
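
For anyone who wants to see the limit in action, here's a minimal Python sketch (the directory names are made up; on a stock Windows system the makedirs call fails once the absolute path grows past 260 characters, while a system with long paths enabled, and a long-path-aware application, can succeed):

import os

# Build a nested path well beyond the classic 260-character MAX_PATH.
long_path = os.path.join(os.path.abspath("deep"), *(["subdir"] * 40))
print(len(long_path), "characters")

try:
    os.makedirs(long_path, exist_ok=True)
    print("Created - long paths work here")
except OSError as exc:
    print("Hit the path limit:", exc)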

Brain infections may spark Alzheimer’s, new study suggests

Ars Technica - Mon, 30/05/2016 - 22:00

Strands of beta amyloid fibrils form around yeast in culture media. (credit: D.K.V. Kumar et al. / Science Translational Medicine (2016))

The protein globs that jam brain circuits in people with Alzheimer’s disease may not result from a sloppy surplus, but rather a bacterial battle, a new study suggests.

Previously, researchers assumed that the protein—beta amyloid—was just a junk molecule that piled up. And efforts to cure Alzheimer’s focused on clearing out clogs and banishing beta amyloid from the brain. But a new study conducted using mice and worms suggests that the protein clumps are actually microbial booby traps, sturdy proteinaceous snares intended to confine invading microbes and protect the brain.

The findings, published in the journal Science Translational Medicine, suggest that Alzheimer’s may result from the brain’s effort to fight off infections. While that hypothesis is controversial and highly speculative at this point, it could dramatically alter the way researchers and doctors work to treat and prevent the degenerative disease.


Oracle's lawyer publishes op-ed on lost case

OS news - Mon, 30/05/2016 - 19:55
After Oracle's expected and well-deserved loss to Google, Oracle's attorney Annette Hurst published an op-ed about the potential impact of the case on the legal landscape of the software development industry. The op-ed focuses on one particular aspect of Google's position, which the author summarizes as follows: [B]ecause the Java APIs have been open, any use of them was justified and all licensing restrictions should be disregarded. In other words, if you offer your software on an open and free basis, any use is fair use. This position, she claims, puts the GPL in jeopardy: common dual-licensing schemes (GPL + proprietary license) depend on developers' ability to enforce the terms of the GPL. It is pretty obvious that the danger this case poses to the GPL and the open source community is heavily overstated - the amount of attention the case has received is due to the fact that the developer community never really considered header files copyrightable assets. The whole "GPL in jeopardy" claim, as well as the passage saying that "[n]o copyright expert would have ever predicted [use of header files for reimplementation of an API] would be considered fair", is merely an attempt to deceive readers. The interesting bit is why Oracle's lawyer tries to pose her client's attempt at squeezing some coins from Google as an act of defending the free software community. Does Oracle still think open source proponents may regard it as an ally, even after its acquisition of Sun and the damage dealt to the OpenSolaris, OpenOffice and MySQL projects?

Kernel prepatch 4.7-rc1

LWN.net - Mon, 30/05/2016 - 17:49
Linus has released 4.7-rc1 and closed the merge window for this release, saying "this time around we have a fairly big change to the vfs layer that allows filesystems (if they buy into it) to do readdir() and path component lookup in parallel within the same directory. That's probably the biggest conceptual vfs change we've had since we started doing cached pathname lookups using RCU." The code name has been changed to "Psychotic Stoned Sheep."

If climate scientists are in it for the money, they’re doing it wrong

Ars Technica - Mon, 30/05/2016 - 17:10

It's Memorial Day, all Ars staff is off, and we're grateful for it (running a site remains tough work). But on a normal Monday, inevitably we'd continue to monitor news from the world of climate change. Our John Timmer examined the claims that scientists are in it solely for the money in February 2011, and we're resurfacing his piece for your holiday reading pleasure.

One of the more unfortunate memes that makes an appearance whenever climate science is discussed is the accusation that, by hyping their results, climate scientists are ensuring themselves steady paychecks, and may even be enriching themselves. A Google search for "global warming gravy train" pulls out over 50,000 results (six of them from our forums).

It's tempting to respond with indignation; after all, researchers generally are doing something they love without a focus on compensation. But, more significantly, the accusation simply makes no sense on any level.


A Lot of Weird Stuff Has Been Happening in the Oceans

Wired - Mon, 30/05/2016 - 17:00
Whether the causes are El Niño or the "Blob" or ultimately climate change, a lot of weird stuff has been happening in the oceans.

A gratuitous gallery of warbirds for Memorial Day

Ars Technica - Mon, 30/05/2016 - 17:00

The workhorse of the US Army Air Corps' Eighth Air Force in World War II was the B-17. This one is a B-17G called Shoo Shoo Shoo Baby, and it flew 24 combat missions during the war, ending its service after making an emergency landing in Sweden. The Eighth Air Force suffered very heavy casualties during WWII—more than 26,000 personnel lost their lives.

Americans have honored those lost in war in one form or another since just after the Civil War. Memorial Day as we know it—a federal holiday on the last Monday in May—is more recent, dating back to 1968. But the sentiment is the same—remembering those who paid the ultimate price in defense of their country. Since a recent trip happened to take us by the National Museum of the United States Air Force in Dayton, Ohio, we've decided to celebrate it here at Ars by bringing you this gallery of some fine-looking warbirds.

The museum can be found at Wright-Patterson Air Force Base. It's truly vast—even giants of the air like the B-36 and B-52 can seem small underneath the roof of one of its hangars. It also has some rather significant planes in its collection, notably Bockscar, one of the two B-29s that dropped atom bombs on Japan in World War II (the Enola Gay lives at the Smithsonian's Udvar-Hazy collection in Dulles, VA).

The collections under those massive hangars are organized chronologically, from the beginning of flight through World War II, Korea, Vietnam, the Cold War, through to today. Sadly, we weren't able to check out one of the museum's most fascinating aircraft, the remaining North American XB-70 Valkyrie; the new hangar for research and experimental aircraft (and old Air Force Ones) doesn't open until next week.


The White House Is Finally Prepping for an AI-Powered Future

Wired - Mon, 30/05/2016 - 17:00
The Obama administration is trying to get a handle on AI before the technology starts to think for itself.

Review: Yuneec Typhoon H

Wired - Mon, 30/05/2016 - 17:00
How do you beat DJI at the quadcopter game? You build a hexacopter.

The spammer who logged into my PC and installed Microsoft Office

Ars Technica - Mon, 30/05/2016 - 16:23

(credit: Aurich Lawson / Thinkstock)

It's Memorial Day, all Ars staff is off, and we're grateful for it (running a site remains tough work). But on a normal Monday, inevitably we'd continue to monitor the security world. Our Jon Brodkin willingly embraced a firsthand experience with low-grade scammers in April 2013, and we're resurfacing his piece for your holiday reading pleasure.

It all began with an annoying text message sent to an Ars reader. Accompanied by a Microsoft Office logo, the message came from a Yahoo e-mail address and read, "Hi, Do u want Microsoft Office 2010. I Can Remotely Install in a Computer."

An offer I couldn't refuse.

The recipient promptly answered "No!" and then got in touch with us. Saying the spam text reminded him of the "'your computer has a virus' scam," the reader noted that "this seems to be something that promises the same capabilities, control of your computer and a request for your credit card info. Has anyone else seen this proposal?"


Game of Thrones Recap: The Power of Swords—and Sanctimony

Wired - Mon, 30/05/2016 - 16:21
The final confrontation might be many episodes away, but the major players vying for the throne are starting to arm themselves for battle.

Should broadband data hogs pay more? ISP economics say “no”

Ars Technica - Mon, 30/05/2016 - 15:45

Don't be stingy, guys.

It's Memorial Day, all Ars staff is off, and we're grateful for it (running a site remains tough work). But on a normal Monday, inevitably we'd continue to monitor the world of ISPs—especially how the major players handle big data users. Our Nate Anderson looked at the economic side of the decision in July 2010, and we're resurfacing his piece for your holiday reading pleasure.

Just over a year ago, Time Warner Cable rolled out an experiment in several cities: monthly data limits for Internet usage that ranged from 5GB to 40GB. Data costs money, and consumers would need to start paying their fair share; the experiment seemed to promise an end to the all-you-can-eat Internet buffet at which contented consumers had stuffed themselves for a decade. Food analogies were embraced by the company, with COO Landel Hobbs saying at the time, "When you go to lunch with a friend, do you split the bill in half if he gets the steak and you have a salad?"

In the middle of the controversy, TWC boss Glenn Britt told BusinessWeek something similar, though with less edible imagery. "We need a viable model to be able to support the infrastructure of the broadband business," he said. "We made a mistake early on by not defining our business based on the consumption dimension."


Octopuses may indeed be your new overlords

Ars Technica - Mon, 30/05/2016 - 15:00

A giant Pacific octopus shows its colors at the Monterey Bay Aquarium. (credit: Monterey Bay Aquarium)

Over the past 60 years, the population of cephalopods—octopuses, squids, and cuttlefish—has been steadily growing. This is particularly remarkable because many types of marine life have been dying out as carbon levels in the oceans rise, making the water more acidic. So even as numbers of crabs, sea stars, and coral reefs are shrinking, the tentacled creatures of the deep are thriving.

Writing in Current Biology, a large group of marine biologists describe how they discovered this trend. Looking at the past 61 years of fisheries data from all major oceans, they examined numbers of cephalopods that are bycatch, or accidentally caught along with target fish. Using these numbers as a proxy for cephalopod populations as a whole, they discovered a steady increase over the decades, across all cephalopod species. The question is why.

The researchers say it's likely a function of a cephalopod's ability to adapt quickly. "These ecologically and commercially important invertebrates may have benefited from a changing ocean environment," they write. Most cephalopods have very short lifespans and are able to change their behavior very quickly during their lifespans. Indeed, octopuses are tool-users who can learn quickly, leading to many daring escapes from tanks in labs as well as brilliant forms of camouflage at the bottom of the ocean. All these characteristics add up to a set of species who can change on the fly, as their environments are transformed.


Munch, Monet, Michelangelo, and more: High art through a LEGO lens

Ars Technica - Mon, 30/05/2016 - 14:30

Seattle's Pacific Science Center is the latest home to Nathan Sawaya's all-LEGO art exhibit.


SEATTLE—We at Ars love a good piece of LEGO design, particularly the fare found annually at regional fan fests like BrickCon. But while those shows impress with pop-culture references and sprawling towns full of vehicles, spacecraft, ships, and villagers, they don't typically include the kinds of original work or high-art references you'd expect to see at a museum.

Oregon-raised artist Nathan Sawaya, on the other hand, has made art out of LEGOs for years—and shown it off at art galleries across the world since 2007. The artist's latest show, which we caught on its opening weekend in Seattle, continues to revolve around his original creations, which are included in the lower gallery (and will be familiar to anybody who's attended a Sawaya show over the years). But his more recent work has revolved around LEGO recreations of classic paintings and sculptures, which you'll see in this article's upper gallery.

From Monet to Munch, and from Egyptian temples to politically charged Americana, Sawaya's Art of the Brick collection crosses a ton of artistic movements off the LEGO list. You can see all of this and more at the Pacific Science Center until September 11.


John Goerzen: That was satisfying

Planet Debian - Mon, 30/05/2016 - 14:29

It’s been a while due to all sorts of other stuff going on. Nice to see this clogging my inbox again:

It really is satisfying to close bugs!

Kennedy’s vision for NASA inspired greatness, then stagnation

Ars Technica - Mon, 30/05/2016 - 14:00

President Kennedy delivers his "Decision to Go to the Moon" speech on May 25, 1961 before Congress.


The spring of 1961 was a time of uncertainty and insecurity in America. The Soviets had beaten the United States to space four years earlier with Sputnik, and in April 1961, they flew Yuri Gagarin into space for a single orbit around the planet. Finally, on May 5th, America responded by sending Alan Shepard into space, but he only made a suborbital flight.

Few would have predicted then that just five years later the United States would not only catch the Soviets in space but surpass them on the way to the moon. Perhaps that is the greatness of John F. Kennedy, who found in such a moment not despair, but opportunity. When Kennedy spoke to Congress on May 25th, 55 years ago, NASA hadn’t even flown an astronaut into orbit. Yet he declared the U.S. would go to the moon before the end of the decade.

“No single space project in this period will be more exciting, or more impressive, or more important for the long-range exploration of space; and none will be so difficult or expensive to accomplish,” Kennedy told Congress. “In a very real sense it will not be one man going to the moon, it will be an entire nation. For all of us must work to put him there.”


How Overwatch Became a Rarity: The Troll-Free Online Shooter

Wired - Mon, 30/05/2016 - 14:00
Overwatch's lead designer opens up on how Blizzard crafted Overwatch not only to appeal to families, but also to create a happier player community.

Why can’t the Estonian president buy a song off iTunes for his Latvian wife?

Ars Technica - Mon, 30/05/2016 - 13:00

President Toomas Hendrik Ilves, in conversation with Cyrus Farivar. Filmed by Chris Schodt/Edited by Jennifer Hahn.

PALO ALTO, Calif.—I don’t usually dress up for interviews, but then, I don’t usually interview heads of state, either.

On a recent afternoon, I waited patiently in a generic conference room with yellow-tinted walls at the Westin Hotel, dressed in a grey suit and a tie, eagerly anticipating the arrival of Estonian President Toomas Hendrik Ilves. My videographer, Chris Schodt, busily set up his camera and light rig.


Daniel Stender: My work for Debian in May

Planet Debian - Mon, 30/05/2016 - 12:55

No double posting this time ;-)

I haven't had much spare time this month to spend on Debian, but I was able to work on the following packages:

  • golang-github-hpcloud-tail/1.0.0+git20160415.b294095-3: added a versioned dependency & rebuilt against golang-fsnotify/1.3.0-3 to fix an FTBFS on ppc64el.

  • updates: packer/0.10.1-1, pybtex/0.20.1-1, afl/2.12b-1, afl/2.13b-1, pyutilib/5.3.5-1.

  • new packages: golang-github-azure-go-ntlmssp/0.0~git20160412.e0b63eb-1 (needed by Packer 0.10.1), and python-latexcodec/1.0.3-1 (needed by Pybtex 0.20).

  • prospector/0.11.7-7 fixed for reproducible builds: there were variations in the sorting order of dependencies in prospector.egg-info/requires.txt. I've prepared a patch to make the package reproducible again (the problem began with 0.11.7-5) until the proposed toolchain patch for setuptools (#804249) gets accepted (a sketch of the underlying idea follows this list).

  • python-latexcodec/1.0.3-3 also fixed for reproducible builds (#824454).
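
The gist of such reproducibility fixes is almost always the same: emit lists in a deterministic order instead of whatever order a set or dict iteration happens to produce. A minimal sketch of the idea in Python (the requirement names here are made up, and the actual patch differs):

# Writing dependencies sorted makes requires.txt byte-identical across builds.
requirements = {"pylint", "pep8", "mccabe", "dodgy"}

with open("requires.txt", "w") as f:
    for req in sorted(requirements):
        f.write(req + "\n")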

This series of blog posts also includes brief introductions to new packages in the archive. This month there are two:

Pyinfra

Pyinfra is a new project which is still under development. It has already been covered in an interesting German article1, and is now available as a package maintained within the Python Applications Team. It's currently a one-man production by Nick Barrett, eagerly developed in the past weeks (we're currently at 0.1~dev24).

Pyinfra is a remote server configuration/provisioning/service deployment tool in the same software category as Puppet or Ansible2. It's for provisioning one or an array of remote servers with software packages and for configuring them. Pyinfra runs agentless like Ansible; that means nothing special (like a daemon) has to run on the targeted servers to use it. It's written to provision POSIX-compatible Linux systems and offers alternatives when it comes to distribution-specific features like package managers (it supports apt as well as yum, for example). The documentation can be found in /usr/share/doc/pyinfra/html/.

Here's a little crash course on how to use Pyinfra. The pyinfra CLI tool is used on the command line as follows; deploy scripts, single operations, or facts (see below) can be run against a single server or a multitude of remote servers:

$ pyinfra -i <inventory script/single host> <deploy script>
$ pyinfra -i <inventory script/single host> --run <operation>
$ pyinfra -i <inventory script/single host> --facts <fact>

Remote servers which are operated on must provide a working shell and must be reachable by SSH. For connecting, --port, --user, --password, --key/--key-password and --sudo flags are available, --sudo to gain superuser rights. Root access or sudo rights of course have to be set up already. By the way, localhost can be operated on the same way.
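
Putting a few of those flags together, a deploy over a non-standard port with sudo could look like this (host, port, user and script name are of course made up):

$ pyinfra -i 192.0.2.10 --port 2222 --user admin --key ~/.ssh/sshkey --sudo deploy.py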

Single operations are organized in modules like "apt", "files", "init", "server" etc. With the --run option they can be used individually on servers as follows; e.g. server.user adds a new user on a single targeted system (-v adds verbosity to the pyinfra run):

$ pyinfra -i 192.0.2.10 --run server.user sam --user root --key ~/.ssh/sshkey --key-password 123456 -v

Multiple servers can be grouped in inventories, which hold the targeted hosts and the data associated with them. An inventory file farm1.py, for example, would contain lists like this:

COMPUTE_SERVERS = ['192.0.2.10', '192.0.2.11']
DATABASE_SERVERS = ['192.0.2.20', '192.0.2.21']

Group designators must be all caps. A higher level of grouping is provided by the file names of inventory scripts: COMPUTE_SERVERS and DATABASE_SERVERS can be referenced at the same time by the group designator farm1. Plus, all servers are automatically added to the group all. Inventory scripts should be stored in the subfolder inventory/ in the project directory. An inventory file can then be used in place of a specific IP address like this; the single operation is performed on all the machines given in farm1.py:

$ pyinfra -i inventory/farm1.py --run server.user sam --user root --key ~/.ssh/sshkey --key-password=123456 -v

Deployment scripts can be used together with group data files in the subfolder group_data/ in the project directory. For example, group_data/farm1.py applies to all servers given in inventory/farm1.py (by the way, all.py applies to all servers) and contains the arbitrary attribute user_name (attributes must be lowercase), next to the authentication data for the whole inventory group:

user_name = 'sam'
ssh_user = 'root'
ssh_key = '~/.ssh/sshkey'
ssh_key_password = '123456'

The arbitrary attribute can be picked up by a deployment script through host.data, so user_name can be used again for e.g. server.user(), like this:

from pyinfra import host
from pyinfra.modules import server

server.user(host.data.user_name)

This deploy, the ensemble of inventory file, group data file and deployment script (the latter usually placed top level in the project folder), can then be run this way:

$ pyinfra -i inventory/farm1.py deploy.py

You've guessed it: since deployment scripts are Python scripts, they are fully programmable (note that Pyinfra is built for and runs on Python 3 on Debian), and that's the main advantage of this piece of software.
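
To illustrate that programmability with a small made-up example: ordinary Python control flow can drive the operations, here creating several users and a per-user directory (the names and paths are invented; server.user appears above, and files is one of the operation modules listed earlier):

from pyinfra.modules import files, server

# A plain Python loop issuing pyinfra operations for each user.
for name in ['sam', 'alice']:
    server.user(name)
    files.directory('/srv/backups/' + name)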

Quite handy for that are Pyinfra facts: functions which check different things on remote systems and return the information as Python data. For example, deb_packages returns a dictionary of the packages installed on a remote apt-based server:

$ pyinfra -i 192.0.2.10 --fact deb_packages --user root --key ~/.ssh/sshkey --key-password=123456
{
    "192.0.2.10": {
        "libdebconfclient0": "0.192",
        "python-debian": "0.1.27",
        "libavahi-client3": "0.6.31-5",
        "dbus": "1.8.20-0+deb8u1",
        "libustr-1.0-1": "1.0.4-3+b2",
        "sed": "4.2.2-4+b1",

Using facts, Pyinfra reveals its full potential. For example, a deployment script could go like this; the linux_distribution fact returns a dict containing the name of the installed distribution:

from pyinfra import host
from pyinfra.modules import apt

if host.fact.linux_distribution['name'] == 'Debian':
    apt.packages(packages='gummi', present=True, update=True)
elif host.fact.linux_distribution['name'] == 'CentOS':
    pass

I'll spare you more sophisticated examples to keep this introduction simple. Beyond fancy deployment scripts, Pyinfra features its own API through which it can be programmed from the outside, and much more. But maybe that's enough of an introduction to Pyinfra; those are the usage basics.

Pyinfra is a brand new project, and it remains to be seen whether the developer can keep developing the tool the way he has these days. For a private project it would be insane to try to become a contender against the established "big" free configuration management tools and frameworks, but whether or not Puppet has become too complex in the meantime3, I really don't think that's the point here. Pyinfra follows its own approach in being programmable the way it is. And it certainly doesn't hurt to have it in the toolbox already; it doesn't have to replace anything.

Brainstorm

After a first package spent some time in experimental, the Brainstorm library from the Swiss AI research institute IDSIA4 is now available as python3-brainstorm in unstable. Brainstorm is a lean, easy-to-use library for setting up deep learning networks (multi-layered artificial neural networks) for machine learning applications such as image and speech recognition or natural language processing. Setting up a working training network for a classifier of handwritten digits like the MNIST dataset (the usual "hello world") takes just a couple of lines, as one of the examples demonstrates. The package is maintained within the Debian Python Modules Team.

The Debian package ships a couple of examples in /usr/share/python3-brainstorm/examples (the data/ and examples/ folders of the upstream tarball are combined there). Among them are5 (a quick way to try them out follows the list):

  • scripts for creating proper HDF5 training data of the MNIST database of handwritten digits and for training a simple neural network on it (create_mnist.py, mnist_pi.py),

  • examples for setting up data and training a convolutional neural network (CNN) on the CIFAR-10 dataset of pictures (create_cifar10.py, cifar10_cnn.py),

  • as well as example scripts for setting up training data and creating an LSTM (long short-term memory) recurrent neural network (RNN) on test data used in the Hutter Prize competition (create_hutter.py, hutter_lstm.py).

  • And there's also another example script for creating training data of the CIFAR-100 dataset (create_cifar100.py).
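
Following the advice from footnote 5, a quick way to try the MNIST example could look like this (assuming python3-brainstorm is installed; the workspace path is made up):

$ cp -r /usr/share/python3-brainstorm/examples ~/brainstorm-work
$ cd ~/brainstorm-work
$ python3 create_mnist.py
$ python3 mnist_pi.py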

The current documentation in /usr/share/doc/python3-brainstorm/html/ isn't complete yet (several chapters are under construction), but there's a walkthrough of the CIFAR-10 example. The MNIST example has been extended by GitHub user pinae and was explained in the German C't recently6.

What are the perspectives for further development? As Zhou Mo confirmed, there are a couple of deep learning frameworks around with a rather poor outlook, since they have been abandoned after being completed as PhD projects. There's really no point in striving to have them all in Debian; the ITP of Minerva, for example, has been given up partly for this reason: there haven't been any commits since 08/2015 (and because cuDNN isn't available and most likely won't be). Brainstorm, whose 0.5 release dates from 05/2015, also was a PhD project at IDSIA. It's stated on GitHub that the project is "under active development", but the rather sparse project page on the other hand expresses the "hope the community will help us to further improve Brainstorm", a sentence which quite often implies that the developers are not actively working on a project. But there are recent commits, and it looks like upstream is active and can be reached when there are problems. So I don't think we're riding a dead horse here.

The downside for Brainstorm in Debian is that the libraries needed for GPU-accelerated processing apparently can't be fully provided. Pycuda is available, but scikit-cuda (an additional library which provides wrappers for CUDA features like CUBLAS, CUFFT and CUSOLVER) is not and won't be, because the CULA Dense Toolkit (for which scikit-cuda also contains wrappers) is not freely available as source. Because of that, a dependency on pycuda, not even as Suggests (it's non-free), has been spared. Without GPU acceleration, Brainstorm computes the matrices on OpenBLAS using a Cython wrapper on the NumpyHandler; the PyCudaHandler can't be used. OpenBLAS makes pretty good use of the available hardware (it distributes work over all available CPU cores), but it's not yet possible to run Brainstorm full throttle using available floating point devices to reduce training times, which becomes crucial when the projects get bigger.

Brainstorm joins a number of deep learning frameworks already available or becoming available in Debian. Currently there are:

  • Caffe for image recognition and classification7 is just around the corner (#823140).

  • Theano is currently in experimental and will be ready for Stretch together with libgpuarray (OpenCL-based GPU-accelerated processing) and Keras (an abstraction layer). It can already run on NVIDIA graphics cards via CUDA8 (limited to amd64 and ppc64el, though).

  • Lasagne, a somewhat higher-level abstraction layer for Theano, is RFP (#818641).

  • Google's TensorFlow, the free successor of DistBelief, is currently an ITP (#804612). It's waiting for Google's build system Bazel to become available.

  • Torch is also ITP (#794634). It's blocked by a wishlist bug on dh-lua that needs to get closed.

  • Amazon's own machine learning workhorse dsstne ("destiny") has now also been put under a free license and will become available for Debian (contrib) in the foreseeable future (#824692). It's not yet suited for image recognition applications, though (it lacks CNN support).

  • MXNet is RFP (#808235).

I've checked out Microsoft's CNTK, but although it was also set free recently, I have my doubts whether it could be included. Apparently it has dependencies on non-free software and most likely other issues. So much for a little update on the state of deep learning in Debian; please excuse it if my radar has missed something.

  1. Tim Schürmann: "Schlangenöl: Automatisiertes Service-Deployment mit Pyinfra". In: IT-Administrator 05/2016, pp. 90-95. 

  2. For a comparison of configuration management software like this, see Bößwetter/Johannsen/Steig: "Baukastensysteme: Konfigurationsmanagement mit Open-Source-Software". In: iX 04/2016, pp. 94-99 (please excuse the prevalence of German articles among the pointers, I just have them at hand). 

  3. On the points of criticism of Puppet, see Martin Loschwitz: "David gegen Goliath – Zwei Welten treffen aufeinander: Puppet und Ansible". In: Linux-Magazin 01/2016, pp. 50-54. 

  4. See the interview with IDSIA's deep learning guru Jürgen Schmidhuber in the German C't 2014/09, p. 148. 

  5. The example scripts need some more fine-tuning. The environment variable BRAINSTORM_DATA_DIR can be set so that the data creation scripts don't have to run in place, but the trained networks currently still try to write their output in place. So please copy the scripts into some workspace if you want to try them out. I'll patch the example scripts to run out of the box soon. 

  6. Johannes Merkert: "Ziffernlerner. Ein künstliches neuronales Netz selber gebaut". In: C't 2016/06, pp. 142-147. Web: http://www.heise.de/ct/ausgabe/2016-6-Ein-kuenstliches-neuronales-Netz-selbst-gebaut-3118857.html (hehe, what kind of tag is "Gradientenabstieg"?) 

  7. See Ramon Wartala: "Tiefenschärfe: Deep learning mit NVIDIAs Jetson-TX1-Board und dem Caffe-Framework". In: iX 06/2016, pp. 100-103. 

  8. https://lists.debian.org/debian-science/2016/03/msg00016.html 

The Greatest Spectacle in Racing turns 100: The 2016 Indy 500

Ars Technica - Mon, 30/05/2016 - 12:30

(credit: Aurich Lawson)

INDIANAPOLIS—When it comes to American sporting traditions, there are few events as storied as the Indianapolis 500, a 500-mile test of speed, endurance, and bravery held at the end of May. Its home is the Indianapolis Motor Speedway, a 2.5-mile (4km) race track that's not only the oldest of its kind but also the largest sporting venue anywhere on Earth. And this year's Indy 500 is a special one—it's the race's 100th running. With speeds well in excess of 200mph (321km/h), it's the fastest race on the motorsports calendar, and this year Ars was in attendance along with more than 350,000 other race fans to take in what's often called the greatest spectacle in racing.

The Track

As we'll see, the cars have changed a lot over the course of those hundred runnings. And the race has gone through good times—with crowds topping 400,000—and bad. There's been innovation and more than its fair share of tragedy. But throughout it all the track has remained a constant. Well, almost.

The Indianapolis Motor Speedway was built in 1909 by Carl Fisher, who wanted to create a venue for the nascent American auto industry to test its new-fangled creations. Initially, all 2.5 miles of the track's surface were made of crushed stone, something that proved conducive to a series of fatal accidents that started with the inaugural car race held on August 19 of that year. As the death toll mounted over the next few days, Fisher and his partners made the wise decision to pave it. They opted for bricks—more than 3.2 million of them—leading locals to dub the speedway "the Brickyard."

