Unconventional computing

Cerf's Up | DOI:10.1145/2666093 | Vinton G. Cerf

The August 2014 issue of IEEE Spectrum had two articles of interest related to computing: "Silicon's Second Act" and "Spin Memory Shows Its Might." On top of that, in the last couple of years, IBM has demonstrated two remarkable achievements: the Watson artificial intelligence system and the TrueNorth chip, the subject of the August 8, 2014 Science cover story "Brain Inspired Chip." The TrueNorth chipset and the programming language it uses have demonstrated remarkable power efficiency compared to more conventional processing elements. What all of these topics have in common for me is the prospect of increasingly unconventional computing methods that may naturally force us to rethink how we analyze problems for the purpose of getting computers to solve them for us. I consider this a refreshing development, challenging the academic, research, and practitioner communities to abandon or adapt past practices and to consider new ones that can take advantage of new technologies and techniques.

It has always been my experience that half the battle in problem solving is to express the problem in such a way that the solution may suggest itself. In mathematics, a change of variables can often dramatically restructure the way a problem or formula is presented, leading one to related problems whose solutions may be more readily applied. Changing from Cartesian to polar coordinates, for example, often dramatically simplifies an expression: a Cartesian equation for a circle of radius a centered at (0,0) is x² + y² = a², but the polar version is simply r(θ) = a. It may prove to be the case that the computational methods for solving problems with quantum computers, neural chips, and Watson-like systems will admit very different strategies and tactics than those applied in more conventional architectures.
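The change-of-variables point can be made concrete in a few lines. This is an illustrative sketch (the function names are my own, not from the column): points generated on a circle of radius a satisfy the two-variable Cartesian constraint, while in polar form the same points reduce to the single constant equation r = a.

```python
import math

# A circle of radius a centered at the origin:
#   Cartesian form: x**2 + y**2 == a**2   (a relation between two variables)
#   Polar form:     r == a                (a constant, independent of theta)
# The substitution x = r*cos(theta), y = r*sin(theta) performs the change
# of variables that collapses the constraint.

def on_circle_cartesian(x, y, a, tol=1e-9):
    """Check the Cartesian equation x^2 + y^2 = a^2."""
    return abs(x * x + y * y - a * a) < tol

def to_polar(x, y):
    """Convert Cartesian coordinates to polar (r, theta)."""
    return math.hypot(x, y), math.atan2(y, x)

a = 2.0
for theta in (0.0, math.pi / 4, math.pi / 2, 3.0):
    x, y = a * math.cos(theta), a * math.sin(theta)
    r, _ = to_polar(x, y)
    assert on_circle_cartesian(x, y, a)
    assert abs(r - a) < 1e-9  # in polar form, r is simply the constant a
```

The same geometric object becomes trivial to state once the coordinates match its symmetry, which is exactly the kind of restructuring a new computing substrate may demand of our problem formulations.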
The use of graphics processing units (GPUs) to solve problems, rather than to generate textured triangles at high speed, has already forced programmers to think differently about the way they express and compute their results. The parallelism of GPUs and their ability to run many small "programs" at once has made them attractive for evolutionary or genetic programming, for example. One question is: Where will these new technologies take us?

We have had experiences in the past with unusual designs. The Connection Machine designed by Danny Hillis was one of the first really large-scale computing machines, with 65K one-bit processors connected in a hypercube topology. LISP was one of the programming languages used for the Connection Machines, along with URDU, among others. This brings to mind the earlier LISP machines made by Symbolics and LISP Machines, Inc., among others. The rapid advance in speed of more conventional processors largely overtook the advantage of special-purpose, potentially language-oriented computers. This was particularly evident with the rise of the so-called RISC (Reduced Instruction Set Computing) machines developed by John Hennessy (the MIPS system) and David Patterson (Berkeley RISC and the Sun Microsystems SPARC), among many others. David E. Shaw, at Columbia University, pioneered one of these explorations with a series of designs for a single instruction stream, multiple data stream (SIMD) supercomputer he called Non-Von (for "non-von Neumann"). Built from single-bit arithmetic logic units, this design bears some similarity to the Connection Machine, although their interconnection designs were quite different. It has not escaped my attention that David Shaw is now the chief scientist of D.E. Shaw Research, focused on computational biochemistry and bioinformatics; this topic also occupies his time at Columbia University, where he holds a senior research fellowship and an adjunct professorship.
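The attraction of GPUs for evolutionary programming comes from applying one small "program" to many individuals at once. A minimal sketch of that data-parallel style, using NumPy on the CPU as a stand-in for launching one GPU thread per individual (the problem, population size, and mutation scheme here are illustrative assumptions, not anything from the column):

```python
import numpy as np

# Evolutionary search for the minimum of f(x) = (x - 3)^2.
# Every step below is a bulk array operation over the whole population,
# the same shape of computation a GPU kernel would perform in parallel.

rng = np.random.default_rng(0)

def fitness(pop):
    # One vectorized pass scores every candidate simultaneously.
    return (pop - 3.0) ** 2

pop = rng.uniform(-10.0, 10.0, size=1024)
for _ in range(50):
    scores = fitness(pop)
    # Truncation selection: keep the better half of the population.
    survivors = pop[np.argsort(scores)[: pop.size // 2]]
    # Refill with mutated copies of the survivors.
    children = survivors + rng.normal(0.0, 0.1, size=survivors.size)
    pop = np.concatenate([survivors, children])

best = pop[np.argmin(fitness(pop))]  # converges near x = 3
```

Nothing in the loop touches individuals one at a time; moving it to a GPU is largely a matter of swapping the array library, which is why this style of problem maps so naturally onto that hardware.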
Returning to new computing and memory technologies, one has the impression that the limitations of conventional silicon technology may be overcome with new materials and new architectural designs, as is beginning to be apparent with the new IBM neural chip. I have only taken time to offer a very incomplete and sketchy set of observations about unconventional computing in this column, but I think it is arguable that in this second decade of the 21st century, we are starting to see serious opportunities for rethinking how we may compute.

Vinton G. Cerf is vice president and Chief Internet Evangelist at Google. He served as ACM president from 2012–2014. Copyright held by author.

October 2014 | Vol. 57 | No. 10 | Communications of the ACM


Communications of the ACM , Volume 57 (10) – Sep 23, 2014



Publisher
Association for Computing Machinery
Copyright
Copyright © 2014 by ACM Inc.
ISSN
0001-0782
DOI
10.1145/2666093


Journal

Communications of the ACM, Association for Computing Machinery

Published: Sep 23, 2014
