prostheticknowledge:

The Parallax View

Crazy online music video experience from 上坂すみれ (Sumire Uesaka) - what seems like a normal embedded YouTube video becomes decorated with additional video-game-esque graphics once you press play. There are even sections that are games themselves, where you hit falling enemies with your mouse pointer. All of this, hosted on a Tumblr blog as well …

You can check this out here

prostheticknowledge:

Holly Herndon - Chorus

Net art music video directed by Akihiko Taniguchi, who created 3D models of desktop spaces using modelling software and collections of user-submitted photographs - video embedded below:

You can even view the models of these desktop spaces in your browser at Akihiko’s site here

1 of 20 jobs in the future (click for slideshare)

"Mount Rushmore of the digital age"

"Mount Rushmore of the digital age"

Transom

juliakaganskiy:

There’s something really appealing about artist-designed video games as music videos. Check out this one for Vinyl Williams’ “Stellarscope” from Lionel Williams.

Also good is this playable album for Gatekeeper’s Exo from Tabor Robak.

dbreunig:

Introducing Reporter, an app which helps you track your life so that you might understand it better.

Nicholas Felton and I have been working on various iterations of this app since 2011, testing and tweaking it to capture the most data with the least amount of hassle and present it in the most insightful way.

Click through and sign up if you’d like to be notified when Reporter is available.

Stay tuned for more details.

Otis Ferguson and the Way of the Camera, Bordwell and Thompson

Google no longer understands how its “deep learning” decision-making computer systems have made themselves so good at recognizing things in photos.

This means the internet giant may need fewer experts in future as it can instead rely on its semi-autonomous, semi-smart machines to solve problems all on their own.

The claims were made at the Machine Learning Conference in San Francisco on Friday by Google software engineer Quoc V. Le in a talk in which he outlined some of the ways the content-slurper is putting “deep learning” systems to work.

"Deep learning" involves large clusters of computers ingesting and automatically classifying data, such as pictures. Google uses the technology for services like Android voice-controlled search, image recognition, and Google translate, among others. […]

What stunned Quoc V. Le is that the machine has learned to pick out features in things like paper shredders that people can’t easily spot – you’ve seen one shredder, you’ve seen them all, practically. But not so for Google’s monster.

Learning “how to engineer features to recognize that that’s a shredder – that’s very complicated,” he explained. “I spent a lot of thoughts on it and couldn’t do it.” […]

This means that for some things, Google researchers can no longer explain exactly how the system has learned to spot certain objects, because the programming appears to think independently from its creators, and its complex cognitive processes are inscrutable. This “thinking” is within an extremely narrow remit, but it is demonstrably effective and independently verifiable.
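
For a rough sense of the "ingesting and automatically classifying data" idea in the quote above, here is a minimal sketch in Python. Everything in it is an assumption for illustration (the synthetic 8x8 "pictures", the two classes, the plain one-layer softmax model); it says nothing about Google's actual systems, which are far larger multi-layer networks:

# Minimal sketch of learned image classification, NOT Google's system:
# a softmax classifier that "ingests" synthetic 8x8 pictures and learns
# to separate two classes by gradient descent. All data is illustrative.
import numpy as np

rng = np.random.default_rng(0)

def make_image(label):
    """Synthetic 'picture': class 0 is bright in the top half, class 1 in the bottom."""
    img = rng.random((8, 8)) * 0.3               # dim background noise
    rows = slice(0, 4) if label == 0 else slice(4, 8)
    img[rows, :] += 0.7                          # bright band marks the class
    return img.ravel()                           # flatten to a 64-vector

labels = rng.integers(0, 2, size=200)
X = np.stack([make_image(y) for y in labels])    # dataset: (200, 64)
Y = np.eye(2)[labels]                            # one-hot targets: (200, 2)

W = np.zeros((64, 2))                            # weights are learned, not hand-engineered
for _ in range(500):
    logits = X @ W
    logits -= logits.max(axis=1, keepdims=True)  # numerical stability
    probs = np.exp(logits)
    probs /= probs.sum(axis=1, keepdims=True)    # softmax probabilities
    W -= 0.1 * X.T @ (probs - Y) / len(X)        # cross-entropy gradient step

test = make_image(1)
print("predicted class:", int(np.argmax(test @ W)))   # expect 1

The point the article makes is that at scale the learned weights end up encoding features no engineer specified, which is why even the researchers cannot always say what the model is keying on.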