• by cs702 on 12/11/2017, 1:20:07 PM

    TPUs are only one part of this eye-opening presentation. Skip to page 28, where Jeff starts talking about:

    * Using reinforcement learning so the computer can figure out how to parallelize code and models on its own. In experiments, the machine beats human-designed parallelization.

    * Replacing B-tree indices, hash maps, and Bloom filters with data-driven indices learned by deep learning models (see the sketch after this list). In experiments, the learned indices outperform the usual stalwarts by a large margin in both computing cost and performance, and are auto-tuning.

    * Using reinforcement learning to manage datacenter power. Machine intelligence outperforms human-designed energy-management policies.

    * Using machine intelligence to replace user-tunable performance options in all software systems, eliminating the need to tweak them with command line parameters like --num-threads=16, --max-memory-use=104876, etc. Machine intelligence outperforms hand-tuning.

    * Using machine intelligence for all tasks currently managed with heuristics. For example, in compilers: instruction scheduling, register allocation, loop nest parallelization strategies, etc.; in networking: TCP window size decisions, backoff for retransmits, data compression, etc.; in operating systems: process scheduling, buffer cache insertion/replacement, file system prefetching, etc.; in job scheduling systems: which tasks/VMs to co-locate on same machine, which tasks to pre-empt, etc.; in ASIC design: physical circuit layout, test case selection, etc. Machine intelligence outperforms human heuristics.
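
    As a concrete illustration of the learned-index point, here's a minimal sketch (mine, not from the talk or the paper): fit a simple model that maps keys to positions in a sorted array, record the worst prediction error seen at build time, and answer lookups with a bounded search inside that error window. The LearnedIndex class and the single linear fit are purely illustrative; the real work uses a staged hierarchy of models (a "recursive model index") instead of one regression, but the lookup contract is the same.

      import bisect
      import numpy as np

      class LearnedIndex:
          """Toy learned index over a sorted array of keys.

          A linear model predicts each key's position; the worst error
          observed at build time bounds a local search at lookup time."""

          def __init__(self, keys):
              self.keys = np.asarray(keys)          # must be sorted
              positions = np.arange(len(self.keys))
              # Fit position ~= a * key + b by least squares.
              self.a, self.b = np.polyfit(self.keys, positions, deg=1)
              predicted = np.rint(self.a * self.keys + self.b).astype(int)
              self.max_err = int(np.max(np.abs(predicted - positions)))

          def lookup(self, key):
              guess = int(np.rint(self.a * key + self.b))
              lo = max(0, guess - self.max_err)
              hi = min(len(self.keys), guess + self.max_err + 1)
              # Bounded binary search inside the model's error window.
              i = lo + bisect.bisect_left(self.keys[lo:hi].tolist(), key)
              if i < len(self.keys) and self.keys[i] == key:
                  return i
              return None

      keys = np.sort(np.random.randint(0, 10**7, size=100_000))
      index = LearnedIndex(keys)
      pos = index.lookup(int(keys[1234]))
      assert keys[pos] == keys[1234]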

    IN SHORT: machine intelligence (today, that means deep learning and reinforcement learning) is going to penetrate and ultimately control EVERY layer of the software stack, replacing human engineering with auto-tuning, self-improving, better-performing code.

    Eye-opening.

  • by cobookman on 12/11/2017, 6:07:05 AM

    The Nvidia Titan V can do 110 TFLOPS with 12GB of 1.7 Gb/s memory [1] and sells for $3,000. The TPU v2 does 180 TFLOPS with 64GB of 19.2 Gb/s memory [2].

    That's a heck of a performance boost for a chip that likely costs Google far less than the Nvidia flagship.

    [1] http://www.tomshardware.com/news/nvidia-titan-v-110-teraflop...
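
    A quick back-of-envelope in Python using only the numbers quoted above. The peak figures are for different architectures and precisions, so treat the ratios loosely, and Google's per-unit cost isn't public, so only the Titan V gets a cost-efficiency number:

      # Spec comparison from the figures quoted in the comment above.
      titan_v = {"tflops": 110, "mem_gb": 12, "price_usd": 3000}
      tpu_v2 = {"tflops": 180, "mem_gb": 64}  # per-unit price unknown

      print(f"compute ratio (TPUv2 / Titan V): {tpu_v2['tflops'] / titan_v['tflops']:.2f}x")  # ~1.64x
      print(f"memory ratio  (TPUv2 / Titan V): {tpu_v2['mem_gb'] / titan_v['mem_gb']:.2f}x")  # ~5.33x
      print(f"Titan V: {1000 * titan_v['tflops'] / titan_v['price_usd']:.1f} TFLOPS per $1k")  # ~36.7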

  • by jamesblonde on 12/11/2017, 7:55:26 AM

    Great talk, with lots of new insights into what's happening at Google. I really think his point that ImageNet is the new MNIST holds true now. Even research labs should be buying DeepLearning11 servers (10 x 1080Ti) for $15k and training large models in a reasonable amount of time. It may seem that Google is way ahead, but they are just doing synchronous SGD, and it was interesting to see the drop in prediction accuracy when going from 128 to 256 TPUv2 cores on ImageNet (76% -> 75% accuracy). So the algorithms for distributed training aren't unknown, and with cheap hardware like the DL11 server, many well-financed research groups can compete with this.
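
    For anyone who hasn't seen the term, synchronous SGD is conceptually simple. Below is a toy NumPy sketch of one step on a linear least-squares model; everything here (worker_gradient, synchronous_sgd_step, the data) is my own illustration. In a real multi-GPU or TPU-pod setup the averaging is an all-reduce across devices rather than a Python loop, and scaling to more cores usually means a larger global batch, which is the usual suspect for accuracy drops like the one mentioned above.

      import numpy as np

      def worker_gradient(w, X, y):
          """Per-worker gradient of mean squared error for a linear model."""
          return X.T @ (X @ w - y) / len(y)

      def synchronous_sgd_step(w, shards, lr):
          """One synchronous data-parallel step: every worker computes a
          gradient on its shard, the gradients are averaged (the all-reduce),
          and a single update is applied to the shared weights."""
          grads = [worker_gradient(w, X, y) for X, y in shards]
          return w - lr * np.mean(grads, axis=0)

      rng = np.random.default_rng(0)
      X = rng.normal(size=(512, 8))
      true_w = rng.normal(size=8)
      y = X @ true_w + 0.01 * rng.normal(size=512)

      # Shard the global batch across 4 simulated workers.
      shards = list(zip(np.array_split(X, 4), np.array_split(y, 4)))
      w = np.zeros(8)
      for _ in range(200):
          w = synchronous_sgd_step(w, shards, lr=0.1)
      print(np.allclose(w, true_w, atol=0.05))  # True: recovers the weights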

  • by larelli on 12/11/2017, 7:38:17 AM

    It looks like this paper has more information: https://arxiv.org/pdf/1712.01208v1.pdf

  • by EvgeniyZh on 12/11/2017, 6:32:38 AM

    Was it filmed? If so, when will the video be available?

  • by nickpsecurity on 12/11/2017, 6:23:32 PM

    Great presentation. As far as applications go, I had already thought this might be useful in lightweight formal methods, to spot problems and suggest corrections for failures in Rust's borrow checker, separation logic on C programs, proof tactics, and static analysis tooling. Take the Rust example: a person might try to express a solution in the language that fails the borrow checker. If they can't understand why, they submit it to a system that attempts to spot where the problem is. The system might start with humans spotting the issue and restructuring the code to pass the borrow checker. Every instance of that would feed into the learning system, which might eventually do it on its own. There's also potential to use automated equivalence checks/tests between the user-submitted code and the AI's suggestions to help the human in the loop decide whether it's worth reviewing before passing it on to the other person.
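
    On the equivalence-check idea specifically, even crude randomized differential testing gives the human in the loop a cheap first signal before any formal review. A sketch in Python (the idea is language-agnostic; quick_equivalence_check and the two example functions are hypothetical):

      import random

      def quick_equivalence_check(original, suggestion, input_gen, trials=1000):
          """Run both versions on randomly generated inputs and report the
          first disagreement, if any. Passing is evidence, not proof, that
          the suggested rewrite preserves behaviour."""
          for _ in range(trials):
              args = input_gen()
              expected, got = original(*args), suggestion(*args)
              if expected != got:
                  return False, args, expected, got
          return True, None, None, None

      # Hypothetical example: a user's function and a machine-suggested rewrite.
      def user_version(xs):
          total = 0
          for x in xs:
              total += x * x
          return total

      def suggested_version(xs):
          return sum(x * x for x in xs)

      gen = lambda: ([random.randint(-100, 100) for _ in range(random.randint(0, 20))],)
      ok, counterexample, expected, got = quick_equivalence_check(user_version, suggested_version, gen)
      print(ok)  # True

    Passing random tests is only evidence; a real pipeline would back it with property-based testing or an SMT/equivalence-checker pass where the languages allow it.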

    In hardware, both digital and analog designers seem to rely on lots of heuristics in how they design things. This could certainly help there, and might be especially useful in analog design given the small number of experienced engineers available.

  • by yeukhon on 12/11/2017, 4:17:38 AM

    While this is a collective work, honestly, after hearing about JD for so many years: is there anything he CAN’T do?

  • by 1024core on 12/10/2017, 9:38:42 PM

    This is some really cool stuff, I hope this submission gets more upvotes and reaches a wider audience.

  • by novaRom on 12/11/2017, 11:54:29 AM

    I speculate that Google will sell the TPUv2 for as little as $500 per PCIe card as early as 2018. Nvidia's Volta TensorCores are essentially the same: 32-bit accumulators and 16-bit multipliers. But GPUs are more general-purpose than deep learning requires, since the most intensive operation is the dot product (y += w*x).
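
    To see why the 16-bit-multiply / 32-bit-accumulate split matters, here's a small NumPy illustration (a scalar loop for clarity; the hardware does this as fused multiply-accumulates, nothing like this code):

      import numpy as np

      rng = np.random.default_rng(0)
      w = rng.normal(size=4096).astype(np.float16)  # 16-bit weights
      x = rng.normal(size=4096).astype(np.float16)  # 16-bit activations

      # 16-bit inputs, products accumulated in a 32-bit register.
      acc32 = np.float32(0.0)
      for wi, xi in zip(w, x):
          acc32 += np.float32(wi) * np.float32(xi)  # y += w * x

      # Accumulating in 16-bit instead loses precision as the sum grows.
      acc16 = np.float16(0.0)
      for wi, xi in zip(w, x):
          acc16 = np.float16(acc16 + wi * xi)

      reference = np.dot(w.astype(np.float64), x.astype(np.float64))
      print(acc32, acc16, reference)

    The 16-bit accumulator typically drifts visibly from the 64-bit reference while the 32-bit one stays close, which is why both TensorCores and the TPU widen the accumulator.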

  • by nl on 12/11/2017, 1:14:52 AM

    That "Learned Index Structures" makes it pretty clear that Karpathy was right in his widely criticized "Software 2.0" piece.