ABSTRACT
The APL notation would appear to be a natural match for convolutional neural networks (CNNs), but traditional implementations of APL have lagged behind the performance of highly tuned, specialized frameworks designed to execute CNNs on the GPU. Moreover, most demonstrations of APL for neural networks have involved relatively small examples. We explore a more complex example, the U-net architecture, and use Co-dfns, a modern APL compiler with GPU support, to compare the state of the art in APL against PyTorch, a representative of the current crop of specialized neural network frameworks. We compare performance as well as the suitability of APL's language design for neural network programming and the clarity and transparency of the resulting code.
We found that the complete “from scratch” APL source was on par with the complexity of the PyTorch reference implementation, albeit more foreign, while being more concise and complete. We also found that, despite the naïve implementation of both Co-dfns and our own code, performance when compiled with Co-dfns was within a factor of 2.2 to 2.4 of the PyTorch implementation on both the GPU and the CPU. We believe this suggests significant avenues of future exploration for machine learning language design, pedagogy, and implementation, both inside and outside of the APL community.
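To give a flavour of what zero-library CNN code can look like in APL, the following is a minimal illustrative sketch (our own, not code taken from the paper) of a zero-padded, same-size 2-D cross-correlation, the core operation of a convolutional layer, written with Dyalog APL's stencil operator ⌺. The names `K`, `X`, `Conv`, and the kernel values are hypothetical placeholders.

```apl
⍝ Illustrative sketch only: 2-D cross-correlation with zero padding,
⍝ using Dyalog's stencil operator ⌺. K and X are hypothetical names.
K ← 3 3⍴0 ¯1 0 ¯1 4 ¯1 0 ¯1 0     ⍝ example 3×3 kernel
Conv ← {k←⍺ ⋄ {+/,⍵×k}⌺(⍴k)⊢⍵}    ⍝ weight each neighbourhood by k, then sum
X ← ?8 8⍴0                         ⍝ random 8×8 "image" with values in [0,1)
Y ← K Conv X                       ⍝ Y has the same shape as X
```

A full convolutional layer would additionally sum over input channels, add a bias, and apply an activation function; a 'valid' convolution can be obtained by dropping the zero-padded border of `Y`.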